This paper presents an unsupervised AutoEncoder-based method for detecting anomalies in industrial machines from the sounds they produce. The proposed framework is trained on log-Mel spectrogram representations of the sound signals. For classification, our hypothesis is that the reconstruction error computed for an anomalous machine is larger than that of a normal machine, since only normal machine sounds are used to train the AutoEncoder. A threshold is chosen to discriminate between normal and anomalous machines. However, this threshold changes as the surrounding conditions vary. To select an appropriate threshold irrespective of the surrounding, we propose a scene classification framework that can classify the underlying surrounding, so that the threshold can be chosen adaptively. Experimental evaluation is performed on the MIMII dataset of industrial machines, namely fans, pumps, valves, and slide rails. Our experimental analysis shows that the adaptive threshold yields a significant performance improvement over a fixed threshold chosen for a given surrounding.
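A minimal sketch of the adaptive-threshold decision rule described above, assuming a trained autoencoder and scene classifier with Keras-style `predict` interfaces; the names `autoencoder`, `scene_classifier`, and the per-scene `thresholds` mapping are hypothetical stand-ins, as the abstract does not specify an API.

```python
import numpy as np

def reconstruction_error(autoencoder, log_mel):
    """Mean squared error between a log-Mel spectrogram and its reconstruction."""
    reconstructed = autoencoder.predict(log_mel)
    return float(np.mean((log_mel - reconstructed) ** 2))

def is_anomalous(autoencoder, scene_classifier, log_mel, thresholds):
    """Flag a clip as anomalous using a threshold adapted to its surrounding.

    `thresholds` maps a scene label to the decision threshold chosen for that
    surrounding condition (assumed pre-computed on normal training clips).
    """
    scene = scene_classifier.predict(log_mel)   # classify the surrounding first
    error = reconstruction_error(autoencoder, log_mel)
    return error > thresholds[scene]            # anomalous if the error exceeds
                                                # the scene-specific threshold
```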
State-of-the-art object detectors are treated as black boxes due to their highly non-linear internal computations. Even with unprecedented advancements in detector performance, the inability to explain how their outputs are generated limits their use in safety-critical applications. Previous work fails to produce explanations for both bounding box and classification decisions, and generally makes individual explanations for various detectors. In this paper, we propose an open-source Detector Explanation Toolkit (DExT) which implements the proposed approach to generate a holistic explanation for all detector decisions using certain gradient-based explanation methods. We suggest various multi-object visualization methods to merge the explanations of multiple objects detected in an image, as well as the corresponding detections, into a single image. The quantitative evaluation shows that the Single Shot MultiBox Detector (SSD) is explained more faithfully than other detectors, regardless of the explanation method. Both quantitative and human-centric evaluations identify that SmoothGrad with Guided Backpropagation (GBP) provides more trustworthy explanations among the selected methods across all detectors. We expect that DExT will motivate practitioners to evaluate object detectors from the interpretability perspective by explaining both bounding box and classification decisions.
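A minimal SmoothGrad sketch for one detector decision, assuming a PyTorch detector `model` and a placeholder `score_fn` that selects a scalar output (a class logit or a box coordinate) for the target detection; plain input gradients stand in here for the guided-backpropagation variant the paper combines with SmoothGrad, and DExT's actual interface may differ.

```python
import torch

def smoothgrad(model, image, score_fn, n_samples=25, sigma=0.1):
    """Average input gradients over noisy copies of the image.

    `image` is assumed not to require grad; `score_fn(model(x))` must return
    a scalar for the chosen bounding-box or classification decision.
    """
    grads = torch.zeros_like(image)
    for _ in range(n_samples):
        noisy = (image + sigma * torch.randn_like(image)).requires_grad_(True)
        score = score_fn(model(noisy))   # scalar score of the target decision
        score.backward()
        grads += noisy.grad
    return grads / n_samples             # saliency map for the chosen decision
```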
Can we take a recurrent neural network (RNN) trained to translate between languages and augment it to support a new natural language without retraining the model from scratch? Can we fix the faulty behavior of the RNN by replacing the portions associated with the faulty behavior? Recent works on decomposing a fully connected neural network (FCNN) and a convolutional neural network (CNN) into modules have shown the value of engineering deep models in this manner, which is standard in traditional SE but foreign to deep learning models. However, prior work focuses on image-based multiclass classification problems and cannot be applied to RNNs due to (a) different layer structures, (b) loop structures, (c) different types of input-output architectures, and (d) the usage of both nonlinear and logistic activation functions. In this work, we propose the first approach to decompose an RNN into modules. We study different types of RNNs, i.e., Vanilla, LSTM, and GRU. Further, we show how such RNN modules can be reused and replaced in various scenarios. We evaluate our approach against 5 canonical datasets (i.e., Math QA, Brown Corpus, Wiki-toxicity, Clinc OOS, and Tatoeba) and 4 model variants for each dataset. We found that decomposing a trained model has a small cost (Accuracy: -0.6%, BLEU score: +0.10%). Also, the decomposed modules can be reused and replaced without needing to retrain.
Fairness of machine learning (ML) software has become a major concern in the recent past. Although recent research on testing and improving fairness has demonstrated impact on real-world software, providing fairness guarantees in practice is still lacking. Certification of ML models is challenging because of the complex decision-making process of the models. In this paper, we propose Fairify, an SMT-based approach to verify the individual fairness property of neural network (NN) models. Individual fairness ensures that any two similar individuals get similar treatment irrespective of their protected attributes, e.g., race, sex, age. Verifying this fairness property is hard because of the global checking and non-linear computation nodes in NNs. We propose a sound approach to make individual fairness verification tractable for developers. The key idea is that many neurons in the NN always remain inactive when a smaller part of the input domain is considered. So, Fairify leverages white-box access to the models in production and then applies formal-analysis-based pruning. Our approach adopts input partitioning and then prunes the NN for each partition to provide a fairness certification or a counterexample. We leverage interval arithmetic and activation heuristics of the neurons to perform the pruning as necessary. We evaluated Fairify on 25 real-world neural networks collected from four different sources, and demonstrated its effectiveness, scalability, and performance over baselines and closely related work. Fairify is also configurable based on the domain and size of the NN. Our novel formulation of the problem can answer targeted verification queries with relaxations and counterexamples, which have practical implications.
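A schematic sketch of the interval-arithmetic step described above: propagate an input partition `[lo, hi]` through one fully connected ReLU layer and mark neurons whose pre-activation upper bound never exceeds zero as provably inactive on that partition, so they can be pruned. The names `W` and `b` are illustrative, not Fairify's actual API.

```python
import numpy as np

def interval_forward(W, b, lo, hi):
    """Propagate an input box through y = W x + b using interval arithmetic."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    y_lo = W_pos @ lo + W_neg @ hi + b   # lower bound of each pre-activation
    y_hi = W_pos @ hi + W_neg @ lo + b   # upper bound of each pre-activation
    return y_lo, y_hi

def inactive_neurons(W, b, lo, hi):
    """Indices of ReLU neurons that are inactive on the whole input partition."""
    _, y_hi = interval_forward(W, b, lo, hi)
    return np.where(y_hi <= 0)[0]        # ReLU output is identically zero here,
                                         # so these neurons can be pruned
```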
Machine Learning (ML) software has been widely adopted in modern society, with reported fairness implications for minority groups based on race, sex, age, etc. Many recent works have proposed methods to measure and mitigate algorithmic bias in ML models. The existing approaches focus on single classifier-based ML models. However, real-world ML models are often composed of multiple independent or dependent learners in an ensemble (e.g., Random Forest), where fairness composes in a non-trivial way. How does fairness compose in ensembles? What are the fairness impacts of the learners on the ultimate fairness of the ensemble? Can fair learners result in an unfair ensemble? Furthermore, studies have shown that hyperparameters influence the fairness of ML models. Ensemble hyperparameters are more complex since they affect how learners are combined in different categories of ensembles. Understanding the impact of ensemble hyperparameters on fairness will help programmers design fair ensembles. Today, we do not fully understand these effects for different ensemble algorithms. In this paper, we comprehensively study popular real-world ensembles: bagging, boosting, stacking, and voting. We have developed a benchmark of 168 ensemble models collected from Kaggle on four popular fairness datasets. We use existing fairness metrics to understand the composition of fairness. Our results show that ensembles can be designed to be fairer without using mitigation techniques. We also identify the interplay between fairness composition and data characteristics to guide fair ensemble design. Finally, our benchmark can be leveraged for further research on fair ensembles. To the best of our knowledge, this is one of the first and largest studies on fairness composition in ensembles yet presented in the literature.
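A minimal sketch of the kind of measurement such a study performs: compare a group fairness metric (statistical parity difference, used here as one representative metric) for individual learners against the voting ensemble that combines them. The learner objects and the majority-vote rule are illustrative, not the benchmark's actual code.

```python
import numpy as np

def statistical_parity_difference(y_pred, protected):
    """P(pred = 1 | protected = 1) - P(pred = 1 | protected = 0)."""
    return y_pred[protected == 1].mean() - y_pred[protected == 0].mean()

def ensemble_vs_learners(learners, X, protected):
    """Fairness of each learner and of their majority-vote ensemble.

    Assumes binary 0/1 predictions from each learner's `predict` method.
    """
    preds = np.array([m.predict(X) for m in learners])
    per_learner = [statistical_parity_difference(p, protected) for p in preds]
    majority = (preds.mean(axis=0) >= 0.5).astype(int)  # simple voting ensemble
    return per_learner, statistical_parity_difference(majority, protected)
```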
The ever-increasing number of materials science articles makes it difficult to infer chemistry-structure-property relationships from the published literature. We used natural language processing (NLP) methods to automatically extract material property data from the abstracts of the polymer literature. As a component of our pipeline, we trained MaterialsBERT, a language model, on 2.4 million materials science abstracts; when used as a text encoder, it outperforms other baseline models on three out of five named entity recognition datasets. Using this pipeline, we obtained ~300,000 material property records from ~130,000 abstracts in 60 hours. The extracted data was analyzed for a variety of applications, such as fuel cells, supercapacitors, and polymer solar cells, to recover non-trivial insights. The data extracted through our pipeline is made available through a web platform at https://polymerscholar.org, which can conveniently locate the material property data recorded in the abstracts. This work demonstrates the feasibility of an automatic pipeline that starts from published literature and ends with a complete set of extracted material property information.
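A hedged sketch of the extraction step: run a transformer-based NER model over an abstract and collect the tagged entities. The checkpoint name below is a publicly available stand-in, not the paper's MaterialsBERT encoder, and the label schema will differ from the one used for material properties.

```python
from transformers import pipeline

# Stand-in general-purpose NER checkpoint; the paper's pipeline uses its own
# encoder and a materials-specific label set.
ner = pipeline("token-classification",
               model="dslim/bert-base-NER",
               aggregation_strategy="simple")

def extract_entities(abstract_text):
    """Return (entity text, label, confidence) triples from one abstract."""
    return [(e["word"], e["entity_group"], float(e["score"]))
            for e in ner(abstract_text)]
```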
Data science research into fake news has gained considerable momentum in recent years, arguably enabled by the emergence of large public benchmark datasets. While gender bias has been recognized in media studies as a problem that pervades news media, there has been very little exploration of the relationship between gender bias and fake news. In this work, we provide the first empirical analysis of gender bias in fake news, leveraging simple and transparent lexicon-based methods over public benchmark datasets. Our analysis establishes an increased prevalence of gender bias in fake news along three dimensions, namely abundance, affect, and proximal words. The insights from our analysis provide a strong argument that gender bias needs to be an important consideration in research on fake news.
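A minimal sketch of a lexicon-based measurement of the kind described above: count occurrences of gendered words in an article. The tiny word lists here are illustrative; the study's actual lexicons and bias dimensions are richer.

```python
import re
from collections import Counter

# Illustrative mini-lexicons; real lexicons are much larger.
MALE = {"he", "him", "his", "man", "men", "father", "son"}
FEMALE = {"she", "her", "hers", "woman", "women", "mother", "daughter"}

def gendered_counts(text):
    """Counts of male- and female-lexicon words in one document."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    male = sum(counts[w] for w in MALE)
    female = sum(counts[w] for w in FEMALE)
    return male, female
```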
Since its inception in 2016, the Alexa Prize program has enabled hundreds of university students to explore and compete in developing conversational agents through the SocialBot Grand Challenge. The goal of the challenge is to build agents capable of conversing coherently and engagingly with humans on popular topics for 20 minutes, while achieving an average rating of at least 4.0/5.0. However, as conversational agents attempt to help users complete increasingly complex tasks, new conversational AI techniques and evaluation platforms are needed. The Alexa Prize TaskBot Challenge, established in 2021, builds on the success of the SocialBot Challenge by introducing the requirement of interactively assisting humans with real-world cooking and do-it-yourself tasks, while using both voice and visual modalities. This challenge requires TaskBots to identify and understand the user's needs, identify and integrate task and domain knowledge, and develop new ways to engage users without distracting them from the task at hand, among other challenges. This paper provides an overview of the TaskBot Challenge, describes the infrastructure support provided to the teams with the CoBot Toolkit, and summarizes the approaches the participating teams took to overcome the research challenges. Finally, it analyzes the performance of the competing TaskBots during the first year of the competition.
Online advertising has recently grown into a highly competitive and complex multi-billion-dollar industry, with advertisers bidding for ad slots at large scale and high frequency. This has led to a growing need for efficient "auto-bidding" algorithms that determine the bids for incoming queries to maximize advertisers' objectives subject to their specified constraints. This work explores efficient online algorithms for a single value-maximizing advertiser under an increasingly popular constraint: Return-on-Spend (RoS). We quantify efficiency in terms of regret relative to the optimal algorithm that knows all queries a priori. We contribute a simple online algorithm that achieves near-optimal regret in expectation while always respecting the specified RoS constraint, when the input sequence of queries consists of i.i.d. samples from some distribution. We also integrate our results with the previous work of Balseiro, Lu, and Mirrokni [BLM20] to achieve near-optimal regret while respecting both the RoS and a fixed budget constraint. Our algorithm follows the primal-dual framework and uses online mirror descent (OMD) for the dual updates. However, we need to use a non-canonical setup of OMD, and therefore the classical low-regret guarantees of OMD, which are for the adversarial setting in online learning, no longer hold. Nonetheless, in our case, and more generally when low-regret dynamics are applied in algorithm design, the gradients encountered by OMD can be far from adversarial and are instead influenced by our own algorithmic choices. We exploit this key insight to show that our OMD setup achieves low regret in the realm of our algorithm.
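A schematic sketch of a primal-dual RoS-constrained bidder in the spirit of the framework described above, not the paper's exact rule: a dual variable `lam` prices the RoS constraint (here simplified to spend <= value, i.e., a target RoS of 1), the bid shades the query's value according to `lam`, and a multiplicative OMD-style update moves `lam` using the realized constraint slack. The step size, bounds, and second-price auction model are all assumptions.

```python
import math

def run_bidder(queries, eta=0.05, lam=1.0):
    """queries: iterable of (value, price) pairs in a second-price auction."""
    total_value = total_spend = 0.0
    for value, price in queries:
        # Lagrangian of "value - lam * (spend - value)" makes winning
        # profitable iff price <= value * (1 + lam) / lam.
        bid = value * (1.0 + lam) / lam      # larger lam -> more conservative
        if bid >= price:                     # win, pay the second price
            total_value += value
            total_spend += price
            slack = value - price            # RoS constraint slack this round
        else:
            slack = 0.0
        lam = lam * math.exp(-eta * slack)   # OMD dual update (entropic mirror
        lam = min(max(lam, 1e-3), 1e3)       # map), kept in a bounded range
    return total_value, total_spend
```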
Oral food challenges (OFCs) are essential for accurately diagnosing food allergy in patients. However, patients are reluctant to undergo OFCs, and for those who do, there is limited access to allergists in rural/community healthcare settings. Prediction of OFC outcomes through machine learning methods can facilitate the removal of food allergens at home, improve patient and physician comfort during OFCs, and save medical resources by minimizing the number of OFCs performed. Clinical data was gathered from 1,112 patients who collectively underwent a total of 1,284 OFCs, including clinical factors such as serum-specific IgE, total IgE, skin prick tests (SPTs), symptoms, sex, and age. Using these clinical features, machine learning models were constructed to predict the outcomes of peanut, egg, and milk challenges. The best-performing model for each allergen was created using the Learning Using Concave and Convex Kernels (LUCCK) method, which achieved areas under the curve (AUCs) of 0.76, 0.68, and 0.70 for peanut, egg, and milk OFC prediction, respectively. Model interpretation via SHapley Additive exPlanations (SHAP) indicates that specific IgE, along with the wheal and flare values of SPTs, is highly predictive of OFC outcomes. The results of this analysis suggest that machine learning has the potential to predict OFC outcomes and reveal relevant clinical factors for further study.
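A hedged sketch of the modeling-plus-explanation step: the LUCCK method used in the paper is not a standard library component, so a gradient-boosting classifier stands in here to illustrate training on the clinical features and explaining the predictions with SHAP, as the paper does.

```python
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

def fit_and_explain(X, y):
    """X: clinical features (sIgE, total IgE, SPT wheal/flare, age, sex, ...);
    y: binary OFC outcome. The model below is a stand-in, not LUCCK."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=0)
    model = GradientBoostingClassifier().fit(X_tr, y_tr)
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_te)   # per-feature contributions to
    return model, shap_values                   # each held-out prediction
```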